Foundations of Artificial Intelligence (FAI) — Appunti TiTilda

Contents

Introduction

Hello reader, this is a short summary of our notes for the course "FAI". If you find an error, or you think some point isn't clear, please tell me and I will fix it (sorry for my bad English). -NP

Chapter One: Types of AI and the Turing Test

We can establish four categories that define different types of AI:

The Turing Test: This test provides a satisfactory operational definition of intelligence. A person interrogates a computer and, if in the end the person cannot tell whether they were writing with a computer or with another person, the AI passes the test.

This test gives us six skills required of the intelligence being tested:

Chapter Two: Agent

Agent: Anything that can perceive its environment and act upon it.

Rational Agent: This type of agent chooses the actions that maximize the expected value of its performance measure.

Performance measure: The criterion for evaluating the success of our agent (for example, in "il gioco dell'oca" (the Game of the Goose), the closer our agent is to the finish, the more points its expected value gains).

Agent Architecture:

The further down we go in the architecture, the greater the complexity and the power (Utility is more powerful than Goal, which is more powerful than Reflex, etc.).

Chapter Three: Agent Architecture

All 4 types of agents are rational, but with different Performance Measures.

Simple Reflex Agent

This is the simplest type of agent; our objective is to be able to create one by the end of the course.

Reflex Agent with state

An example of this agent is the first prototype of the Roomba automatic vacuum.

Goal-based Agents

An example of a possible agent is one built to play tic-tac-toe (tris). Pseudo-code for this agent:

if canwin(player)
    win(player)
else if canwin(opponent)
    block(opponent)
else if free(center)
    take(center)
else if free(corner)
    take(corner)
else if free(side)
    take(side)
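The rule chain above can be turned into a small runnable sketch. This is only an illustration: the board encoding and the helper names (`can_win`, `choose_move`, etc.) are my own, not from the course.

```python
# Board: list of 9 cells, each "X", "O", or None, indexed 0..8 row by row.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def can_win(board, mark):
    """Return a cell that completes a line for `mark`, or None."""
    for line in WINS:
        cells = [board[i] for i in line]
        if cells.count(mark) == 2 and cells.count(None) == 1:
            return line[cells.index(None)]
    return None

def choose_move(board, player, opponent):
    """Apply the priority rules of the goal-based agent above."""
    win = can_win(board, player)
    if win is not None:
        return win                          # 1. win if possible
    block = can_win(board, opponent)
    if block is not None:
        return block                        # 2. block the opponent
    if board[4] is None:
        return 4                            # 3. take the center
    for corner in (0, 2, 6, 8):
        if board[corner] is None:
            return corner                   # 4. take a corner
    for side in (1, 3, 5, 7):
        if board[side] is None:
            return side                     # 5. take a side
    return None                             # board is full
```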
Utility-based Agent

For example, the self-driving system installed on modern cars is a utility-based agent. This is because it must monitor a lot of sensors and variables (speed, navigation, tire pressure, etc.).

Can a Goal-based agent work like a Utility-based agent? The answer is no; but a Utility-based agent can work like a Goal-based one: it is enough to maximize the "happiness" of reaching the goal.

A newer type of agent is the Learning Agent.

Learning Agent

An example of a learning agent is the system used to build the flocking model (based on bird flocks).

The flocking model consists of three basic behaviors:

In this course we learn to build a weak AI.

Weak AI: AI that performs as well as humans.

Strong AI: AI that replicates exactly how humans think.

General AI: AI that can solve a wide variety of tasks.

(Fun fact: OpenAI Five is an AI that can compete internationally at "Dota 2".)

Problem Solving by Search \implies we can find the solution in many ways.

Eight Puzzle


This is a puzzle game where I have a 3x3 matrix composed of 8 numbered tiles (plus a blank), and I have to put them in order from 1 to 8.

I can represent the 8-puzzle as an array:

state = [[0,8,2],[3,4,7],[5,1,6]]

action = (x,y)

struct {
    int from;
    int to;
} action;
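To make the representation concrete, here is a minimal Python sketch: the 0 stands for the blank, and an action slides the neighboring tile into it (the function names are my own):

```python
# 8-puzzle state: 3x3 nested list, 0 marks the blank tile.
state = [[0, 8, 2],
         [3, 4, 7],
         [5, 1, 6]]

def find_blank(s):
    """Return the (row, col) position of the blank."""
    for r, row in enumerate(s):
        for c, v in enumerate(row):
            if v == 0:
                return r, c

def move(s, direction):
    """Swap the blank with the adjacent tile in `direction`
    ('up', 'down', 'left', 'right'); return a new state, or
    None if the move would leave the board."""
    dr, dc = {"up": (-1, 0), "down": (1, 0),
              "left": (0, -1), "right": (0, 1)}[direction]
    r, c = find_blank(s)
    nr, nc = r + dr, c + dc
    if not (0 <= nr < 3 and 0 <= nc < 3):
        return None                       # illegal move
    new = [row[:] for row in s]           # copy, don't mutate the input
    new[r][c], new[nr][nc] = new[nr][nc], new[r][c]
    return new
```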

Search Problems

State Space


State Space as a graph

Optimal Solution: the solution with the lowest cost.

There are problems without a solution.

If I wanted to create a state space for the 8-puzzle, how many states would I have?

8-puzzle: 9! (362,880) states.

If I generate 100 million states per second, I would require about 0.0036 seconds to generate the graph. But the 24-puzzle has 25! (about 10^{25}) states, so I would require more than 10^9 years \implies I have to use a better method.
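The arithmetic can be checked directly (a quick sketch; the rate of 100 million states per second is the one assumed above):

```python
import math

rate = 100_000_000                       # assumed states generated per second

states_8 = math.factorial(9)             # 9! = 362,880 states (8-puzzle)
states_24 = math.factorial(25)           # 25! states (24-puzzle)

print(states_8 / rate)                   # ~0.0036 seconds
years = states_24 / rate / (3600 * 24 * 365)
print(f"{years:.1e} years")              # ~4.9e9 years
```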

Eight-Queens Puzzle

In this game I have to place 8 queens on a chessboard so that no queen stands on a square attacked by another.

8-Queens Puzzle
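A constraint check for this puzzle can be sketched in a few lines of Python, using the common convention that `queens[c]` holds the row of the queen in column c (the encoding and names are my own):

```python
def attacks(r1, c1, r2, c2):
    """Two queens attack each other on the same row, column, or diagonal."""
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def is_solution(queens):
    """True if no pair of queens attacks each other.
    queens[c] = row of the queen placed in column c."""
    n = len(queens)
    return all(not attacks(queens[i], i, queens[j], j)
               for i in range(n) for j in range(i + 1, n))
```

For example, `[0, 4, 7, 5, 2, 6, 1, 3]` is one of the 92 solutions, while placing all queens on the main diagonal fails.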

Path planning

I have multiple paths.

Path Planning

I have to reach the black point on the left from the red one on the right.

So to find a solution I don't create a graph but a tree.

A tree is made of nodes.

A state space is made of states and connections.

The idea is to take a state and, from it, grow the tree with every action the state can take, until I arrive at the goal state.

So I build it from scratch every time? Yes, I do.

Dear reader, before continuing, try to answer this question: why do I prefer building a tree?

For some problems I get an infinite tree (because of a loop, or because there is no solution), so I must find a new method.

The best algorithm for the tree search is the Best-First Search.

Chapter Five: Uninformed Search Strategies

Uninformed Search Strategies: they use only the information contained in the problem formulation.

The Five Elements of U.S.S.

Evaluation of Search Strategies

The bigger the branching factor b, the harder the problem.

The bigger the depth d, the more exploration is required to reach the closest solution.

The algorithm is the same we did in "API", but here we generate the tree step by step, so we don't color the nodes and loops are not a problem for us.

BFS, 7 is the goal state

We start with the first node, 1, and expand it, obtaining the frontier [2, 3]; we expand the first node we see, 2 \implies [3, 4, 5]; we continue with this idea: [4, 5, 6, 7] \to [5, 6, 7, 2] \to [6, 7, 2] \to [7, 2]. Now 7, which is the goal state, is first, so we interrupt the search and never follow the loop.
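The frontier evolution above can be reproduced with a minimal BFS tree search. The successor table encodes the example tree, with node 4 looping back to 2 exactly as in the trace (all names are my own):

```python
from collections import deque

# Successor table of the example tree; node 4 loops back to node 2.
children = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [2], 5: [], 6: [], 7: []}

def bfs(start, goal):
    """Breadth-first tree search: FIFO frontier, no visited set."""
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()        # take the first node we see
        if node == goal:
            return node                  # goal found, interrupt the search
        frontier.extend(children[node])  # expand: append children at the back
    return None
```

Because the frontier is a FIFO queue, the goal at depth 2 is reached before the loop through node 4 is ever re-expanded.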

Complexity: time and space (worst case) \Omicron(b^{d+1})

The same as we saw in "API", but we generate the tree step by step, and here we can run into loops.

Depth-First Search, 7 is the goal state

[1] \to [2, 3] \to [4, 5, 3] \to [2, 5, 3] \to [4, 5, 5, 3]: in this case we have a loop, so we delete it and restart.

[1] \to [2, 3] \to [4, 5, 3] \to [5, 3] \to [3] \to [6, 7] \to [7], and 7 is the goal state.
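The corrected trace corresponds to a DFS that prunes any child already on the current path ("deleting the loop"). A minimal sketch on the same example tree (the successor table and names are my own):

```python
# Successor table of the example tree; node 4 loops back to node 2.
children = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [2], 5: [], 6: [], 7: []}

def dfs(node, goal, path=()):
    """Depth-first tree search that skips children already on the
    current path, so the loop 4 -> 2 is never followed."""
    if node == goal:
        return True
    for child in children[node]:
        if child in path:                # loop detected: prune it
            continue
        if dfs(child, goal, path + (node,)):
            return True
    return False
```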

Complexity: given the maximum depth m of the search tree:

It is clear that the DFS algorithm isn't a good tree-search algorithm.

Last modified:
Written by: Niccolò Papini